Looks like you’ve completely missed the point of SIAI and massively misunderstand AI theory.
It seems to me that you are off by orders of magnitude in your estimate of just how lazy some programmers can be. And the lazier programmers get, the more they try to write programs that do all their work for them.
The ultimate achievement of the lazy programmer is to write, exactly once, a program that anticipates future needs for programs, writes programs that meet those needs, and writes programs that are better at anticipating and meeting such needs, ad infinitum, without any further intervention from said programmer.
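To make the recursion concrete, here is a minimal toy sketch in Python (my own illustration, not anything SIAI proposes): a bootstrap program whose only job is to write the source of a slightly more capable successor and run it, and that successor does the same, up to an arbitrary cap. The solve stand-in, the task list, and the generation cap are all invented for the example; the ultimate-lazy-programmer scenario is this loop with the cap removed and with the generator itself among the things each generation improves.

    # Toy sketch only: each generation writes and runs a slightly more
    # capable successor. A hard cap keeps the recursion finite here.
    MAX_GENERATIONS = 3

    TEMPLATE = """
    def solve(task):
        # Stand-in for "capability": generation {gen} handles the first {gen} steps.
        return task[:{gen}]

    print("generation {gen} solves:", solve(["plan", "code", "test", "deploy"]))

    if {gen} < MAX_GENERATIONS:
        # Write the source of the next, slightly more capable generation and run it.
        exec(TEMPLATE.format(gen={gen} + 1))
    """

    # The programmer intervenes exactly once, to kick off generation 1.
    exec(TEMPLATE.format(gen=1))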
SIAI actually agrees that this is probably not the most economically sensible thing to do, and that it is not what most AIs, or even AGIs, developed in the near future will look like. However, SIAI is also aware that some people will, despite this, still want to be the ultimate lazy programmer and write the ultimate recursively self-modifying AI. No amount of reasonable argument will change this fact.
Therefore, something must be done to prevent the AIs they create from exterminating us. SIAI, in no small part through the work of Yudkowsky, has concluded that the best way of achieving this is currently FAI research, and that eventually the only solution might be to build a Friendly self-modifying AGI before anyone else builds a non-Friendly one, so that the FAI has an unfair head start and can outsmart any later non-Friendly AGIs.
If you want to avoid being logically rude, you will contest either one of the premises (1: some people will attempt to make the ultimate AGI; 2: one of them will eventually succeed; 3: ultimate AGIs are Accidentally Unfriendly by default) or some step in the chain of reasoning above. If you do neither, then the grandparent comment is understating how badly you are missing the point and sidetracking the discussion.